Results 1 - 20 of 35,990
1.
Sensors (Basel) ; 24(6)2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38544148

ABSTRACT

Parkinson's disease is one of the major neurodegenerative diseases affecting the postural stability of patients, especially during gait initiation. There is currently an increasing demand for new non-pharmacological tools that can easily classify healthy versus affected patients, as well as the degree of progression of the disease. The experimental characterization of gait initiation (GI) is usually done through the simultaneous acquisition of about 20 variables, resulting in very large datasets. Dimension-reduction tools are therefore suitable, considering the complexity of the physiological processes involved. Principal Component Analysis (PCA) is a powerful method for reducing the dimensionality of large datasets and emphasizing correlations between variables. In this paper, PCA was enhanced with bootstrapping and applied to the study of GI to identify the three major sets of variables influencing the postural control disability of Parkinsonian patients during GI. We show that the combination of these methods can lead to a significant improvement in the unsupervised classification of healthy/affected patients using a Gaussian mixture model, since it leads to a reduced confidence interval on the estimated parameters. The benefits of this method for the identification and study of the efficiency of potential treatments are not addressed in this paper but could be in future works.
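The core of the approach — attaching a confidence interval to an estimated quantity by resampling — can be sketched with a percentile bootstrap. This is a minimal pure-Python sketch on hypothetical gait-initiation scores; the paper applies the same idea to PCA components and Gaussian-mixture parameters:

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    # Resample with replacement, recompute the statistic, take percentiles
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical scores for one gait-initiation variable
scores = [0.42, 0.51, 0.39, 0.47, 0.55, 0.44, 0.50, 0.41, 0.48, 0.53]
lo, hi = bootstrap_ci(scores, statistics.mean)
```

The same `bootstrap_ci` helper can wrap any estimator — for example, a function returning a mixture-model parameter — which is what makes the bootstrap attractive when no analytical interval is available.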


Subjects
Gait Disorders, Neurologic, Parkinson Disease, Humans, Principal Component Analysis, Confidence Intervals, Parkinson Disease/therapy, Gait/physiology, Postural Balance/physiology
2.
Genet Sel Evol ; 56(1): 18, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38459504

ABSTRACT

BACKGROUND: Validation by data truncation is a common practice in genetic evaluations because of the interest in predicting the genetic merit of a set of young selection candidates. Two of the most used validation methods in genetic evaluations use a single data partition: predictivity or predictive ability (correlation between pre-adjusted phenotypes and estimated breeding values (EBV) divided by the square root of the heritability) and the linear regression (LR) method (comparison of "early" and "late" EBV). Both methods compare predictions with the whole dataset and a partial dataset that is obtained by removing the information related to a set of validation individuals. EBV obtained with the partial dataset are compared against adjusted phenotypes for the predictivity or EBV obtained with the whole dataset in the LR method. Confidence intervals for predictivity and the LR method can be obtained by replicating the validation for different samples (or folds), or bootstrapping. Analytical confidence intervals would be beneficial to avoid running several validations and to test the quality of the bootstrap intervals. However, analytical confidence intervals are unavailable for predictivity and the LR method. RESULTS: We derived standard errors and Wald confidence intervals for the predictivity and statistics included in the LR method (bias, dispersion, ratio of accuracies, and reliability). The confidence intervals for the bias, dispersion, and reliability depend on the relationships and prediction error variances and covariances across the individuals in the validation set. We developed approximations for large datasets that only need the reliabilities of the individuals in the validation set. The confidence intervals for the ratio of accuracies and predictivity were obtained through the Fisher transformation. 
We show the adequacy of both the analytical and the approximated analytical confidence intervals and compare them with bootstrap confidence intervals using two simulated examples. The analytical confidence intervals were closer to the simulated ones in both examples. Bootstrap confidence intervals tend to be narrower than the simulated ones. The approximated analytical confidence intervals were similar to those obtained by bootstrapping. CONCLUSIONS: Estimating the sampling variation of predictivity and of the statistics in the LR method without replication or bootstrapping is possible for any dataset with the formulas presented in this study.
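The Fisher transformation used here for the predictivity interval is simple to sketch. The correlation value and sample size below are hypothetical; this is a minimal illustration, not the paper's full derivation:

```python
import math

def fisher_ci(r, n, alpha=0.05):
    """Wald CI for a correlation coefficient via the Fisher z-transformation."""
    z = math.atanh(r)                      # transform to an ~normal scale
    se = 1.0 / math.sqrt(n - 3)            # standard error on the z scale
    zcrit = 1.959963984540054              # 97.5% standard-normal quantile
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

# e.g. a predictivity of 0.6 estimated from 200 validation individuals
lo, hi = fisher_ci(0.6, 200)
```

Because the transformation is monotone, back-transforming the endpoints with `tanh` yields an interval that always stays inside (-1, 1), unlike a naive Wald interval on the correlation itself.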


Subjects
Genomics, Genetic Models, Humans, Genotype, Reproducibility of Results, Confidence Intervals, Pedigree, Genomics/methods, Phenotype
3.
Clin Pharmacokinet ; 63(3): 343-355, 2024 03.
Article in English | MEDLINE | ID: mdl-38361163

ABSTRACT

BACKGROUND AND OBJECTIVE: With the rise in the use of physiologically based pharmacokinetic (PBPK) modeling over the past decade, the use of PBPK modeling to underpin drug dosing for off-label use in clinical care has become an attractive option. In order to use PBPK models for high-impact decisions, thorough qualification and validation of the model is essential to gain enough confidence in model performance. Currently, there is no agreed method for model acceptance, while clinicians demand a clear measure of model performance before considering implementing PBPK model-informed dosing. We aim to bridge this gap and propose the use of a confidence interval for the predicted-to-observed geometric mean ratio with predefined boundaries. This approach is similar to currently accepted bioequivalence testing procedures and can aid in improved model credibility and acceptance. METHODS: Two different methods to construct a confidence interval are outlined, depending on whether individual observations or aggregate data are available from the clinical comparator data sets. The two testing procedures are demonstrated for an example evaluation of a midazolam PBPK model. In addition, a simulation study is performed to demonstrate the difference between the twofold criterion and our proposed method. RESULTS: Using midazolam adult pharmacokinetic data, we demonstrated that creating a confidence interval yields more robust evaluation of the model than a point estimate, such as the commonly used twofold acceptance criterion. Additionally, we showed that the use of individual predictions can reduce the number of required test subjects. Furthermore, an easy-to-implement software tool was developed and is provided to make our proposed method more accessible. CONCLUSIONS: With this method, we aim to provide a tool to further increase confidence in PBPK model performance and facilitate its use for directly informing drug dosing in clinical care.
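A minimal sketch of the proposed idea — a confidence interval for the predicted-to-observed geometric mean ratio, checked against bioequivalence-style acceptance bounds — assuming individual observations and a normal approximation. The AUC values and the fixed z-quantile are illustrative assumptions only; the paper also handles aggregate data and small-sample refinements:

```python
import math
import statistics

def gmr_ci(predicted, observed, zcrit=1.959963984540054):
    """Normal-approximation CI for the predicted/observed geometric mean ratio."""
    logr = [math.log(p / o) for p, o in zip(predicted, observed)]
    m = statistics.mean(logr)                              # mean log-ratio
    se = statistics.stdev(logr) / math.sqrt(len(logr))     # its standard error
    return math.exp(m), math.exp(m - zcrit * se), math.exp(m + zcrit * se)

pred = [11.2, 9.8, 14.1, 10.5, 12.3, 9.1, 13.0, 11.8]   # hypothetical model AUCs
obs  = [10.4, 10.9, 12.8, 11.6, 11.1, 9.9, 12.2, 12.5]  # hypothetical observed AUCs
gmr, lo, hi = gmr_ci(pred, obs)
accepted = 0.8 <= lo and hi <= 1.25   # bioequivalence-style acceptance bounds
```

Testing whether the whole interval, rather than the point estimate, sits inside the predefined bounds is what distinguishes this approach from the twofold criterion.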


Subjects
Midazolam, Biological Models, Adult, Humans, Midazolam/pharmacokinetics, Confidence Intervals, Computer Simulation, Software
4.
Stat Med ; 43(8): 1577-1603, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38339872

ABSTRACT

Due to the dependency structure in the sampling process, adaptive trial designs create challenges in point and interval estimation and in the calculation of P-values. Optimal adaptive designs, which are designs where the parameters governing the adaptivity are chosen to maximize some performance criterion, suffer from the same problem. Various analysis methods which are able to handle this dependency structure have already been developed. In this work, we aim to give a comprehensive summary of these methods and show how they can be applied to the class of designs with planned adaptivity, of which optimal adaptive designs are an important member. The defining feature of these kinds of designs is that the adaptive elements are completely prespecified. This allows for explicit descriptions of the calculations involved, which makes it possible to evaluate different methods in a fast and accurate manner. We will explain how to do so, and present an extensive comparison of the performance characteristics of various estimators between an optimal adaptive design and its group-sequential counterpart.


Subjects
Research Design, Humans, Confidence Intervals, Sample Size
6.
Stat Methods Med Res ; 33(3): 465-479, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38348637

ABSTRACT

The weighted sum of binomial proportions and the interaction effect are two important cases of the linear combination of binomial proportions. Existing confidence intervals for these two parameters are approximate. We apply the h-function method to a given approximate interval and obtain an exact interval. The process is repeated multiple times until the final-improved interval (exact) cannot be shortened. In particular, for the weighted sum of two proportions, we derive two final-improved intervals based on the (approximate) adjusted score and fiducial intervals. After comparing several currently used intervals, we recommend these two final-improved intervals for practice. For the weighted sum of three proportions and the interaction effect, the final-improved interval based on the adjusted score interval should be used. Three real datasets are used to detail how the approximate intervals are improved.
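For context, a Wald-type interval for a weighted sum of independent binomial proportions — the kind of approximate interval the h-function method takes as its starting point — can be written in a few lines. The counts and weights are hypothetical, and the exact improvement step itself is not reproduced here:

```python
import math

def weighted_sum_wald(ks, ns, ws, zcrit=1.959963984540054):
    """Wald CI for sum_i w_i * p_i from independent binomial counts k_i / n_i."""
    ps = [k / n for k, n in zip(ks, ns)]
    est = sum(w * p for w, p in zip(ws, ps))
    # delta-method variance of the weighted sum
    var = sum(w * w * p * (1 - p) / n for w, p, n in zip(ws, ps, ns))
    half = zcrit * math.sqrt(var)
    return est, est - half, est + half

est, lo, hi = weighted_sum_wald(ks=[18, 25], ns=[60, 80], ws=[0.5, 0.5])
```

The abstract's point is that such approximate intervals need not attain nominal coverage; the h-function method iteratively replaces them with exact intervals that do.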


Subjects
Statistical Models, Binomial Distribution, Confidence Intervals
7.
JASA Express Lett ; 4(2)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38299985

ABSTRACT

Confidence intervals of location (CIL) of calling marine mammals, derived from time differences of arrival (TDOA) between receivers, depend on errors in the TDOAs, receiver locations, clocks, and sound speeds. Simulations demonstrate that a TDOA beamforming locator (TDOA-BL) yields CIL in error by O(10-100) km for experimental scenarios because it is not designed to account for the relevant errors. The errors are large and sometimes exceed the detection distances. Another locator designed to account for all errors, sequential bound estimation, yields CIL that always contain the true location. TDOA-BL has been, and is being, used to understand potential effects of environmental stress on marine mammals; a use worth reconsidering.


Subjects
Caniformia, Animals, Confidence Intervals, Cetacea, Sound
8.
Pharmacoepidemiol Drug Saf ; 33(2): e5750, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38362649

ABSTRACT

PURPOSE: Outcome variables that are assumed to follow a negative binomial distribution are frequently used in both clinical and epidemiological studies. Epidemiological studies, particularly those performed by pharmaceutical companies, often aim to describe a population rather than compare treatments. Such descriptive studies are often analysed using confidence intervals. While precision calculations and sample size calculations are not always performed in these settings, they have the important role of setting expectations of what results the study may generate. Current methods for precision calculations for the negative binomial rate are based on plugging parameter values into the confidence interval formulae. This method has the downside of ignoring the randomness of the confidence interval limits. To enable better practice for precision calculations, methods are needed that address this randomness. METHODS: Using the well-known delta method, we develop a method for calculating the precision probability, that is, the probability of achieving a certain width. We assess the performance of the method in smaller samples through simulations. RESULTS: The method for the precision probability performs well in small to medium sample sizes, and the usefulness of the method is demonstrated through an example. CONCLUSIONS: We have developed a simple method for calculating the precision probability for negative binomial rates. This method can be used when planning epidemiological studies in, for example, asthma, while correctly taking the randomness of confidence intervals into account.
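A simplified sketch of the idea: if the Wald CI width is an increasing function of the estimated rate, the precision probability follows from the approximate (delta-method) normal distribution of that estimate. The dispersion is treated as known here purely for illustration, and all parameter values are hypothetical; the paper's method is more general:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def nb_precision_probability(mu, k, n, target_width, zcrit=1.959963984540054):
    """P(Wald CI width for a negative binomial rate <= target_width)."""
    def width(m):   # CI width as a function of the estimated rate
        return 2 * zcrit * math.sqrt((m + m * m / k) / n)
    # invert width(m) = target_width by bisection (width increases in m)
    lo, hi = 1e-9, 100 * mu
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if width(mid) < target_width:
            lo = mid
        else:
            hi = mid
    m_star = 0.5 * (lo + hi)
    se_mu = math.sqrt((mu + mu * mu / k) / n)   # delta-method SE of the estimate
    return norm_cdf((m_star - mu) / se_mu)

p = nb_precision_probability(mu=0.8, k=1.5, n=300, target_width=0.3)
```

Plugging fixed parameter values into the width formula would instead give a yes/no answer; reporting the probability of achieving the target width is the abstract's proposed improvement.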


Subjects
Statistical Models, Humans, Sample Size, Probability, Binomial Distribution, Confidence Intervals
9.
Article in English | MEDLINE | ID: mdl-38397697

ABSTRACT

Health disparities are differences in health status across different socioeconomic groups. Classical methods, e.g., the delta method, have been used to estimate the standard errors of estimated measures of health disparities and to construct confidence intervals for these measures. However, the confidence intervals constructed using the classical methods do not have good coverage properties in situations involving sparse data. In this article, we introduce three new methods to construct fiducial intervals for measures of health disparities based on approximate fiducial quantities. Through a comprehensive simulation study, we compare the empirical coverage properties of the proposed fiducial intervals against two Monte Carlo simulation-based methods — utilizing either a truncated normal distribution or the gamma distribution — as well as the classical method. The findings of the simulation study advocate for the adoption of the Monte Carlo simulation-based method with the gamma distribution when a unified approach is sought for all health disparity measures.


Subjects
Health Inequities, Confidence Intervals, Computer Simulation, Normal Distribution, Monte Carlo Method
10.
Arthroscopy ; 40(3): 1006-1008, 2024 03.
Article in English | MEDLINE | ID: mdl-38219106

ABSTRACT

The Fragility Index (FI) provides the number of patients whose outcome would need to have changed for the results of a clinical trial to no longer be statistically significant. Although it is a well-intended and easily interpreted metric, its calculation is based on reversing a significant finding, and therefore its interpretation is only relevant in the domain of statistical significance. A well-designed clinical trial includes an a priori sample size calculation that aims to find the bare minimum of patients needed to obtain statistical significance. Such trials are fragile by design! Examining the robustness of clinical trials requires an estimation of uncertainty, rather than a misconstrued, dichotomous focus on statistical significance. Confidence intervals (CIs) provide a range of values that are compatible with a study's data and help determine the precision of results and the compatibility of the data with different hypotheses. The width of the CI speaks to the precision of the results and the extent to which the values contained within have the potential to be clinically important. Finally, one should not assume that a large FI indicates robust findings. Poorly executed trials are prone to bias, leading to large effects and, therefore, small P values and a large FI. Let's move our future focus from the FI toward the CI.
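For concreteness, one common convention for computing the FI — repeatedly flipping a non-event to an event in the arm with fewer events and recomputing a Fisher exact test — can be sketched as follows. The trial counts are hypothetical, and conventions for which arm to modify vary:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for a 2x2 table [[a, b], [c, d]]."""
    r1, r2, c1 = a + b, c + d, a + c
    denom = comb(r1 + r2, c1)
    def prob(x):   # hypergeometric probability of a table with cell a = x
        return comb(r1, x) * comb(r2, c1 - x) / denom
    p_obs = prob(a)
    return sum(prob(x) for x in range(max(0, c1 - r2), min(r1, c1) + 1)
               if prob(x) <= p_obs + 1e-12)

def fragility_index(events1, n1, events2, n2, alpha=0.05):
    """Flip non-events to events in the lower-event arm until p >= alpha."""
    e1, e2, fi = events1, events2, 0
    while fisher_two_sided(e1, n1 - e1, e2, n2 - e2) < alpha:
        if e1 < e2:
            e1 += 1
        else:
            e2 += 1
        fi += 1
    return fi

fi = fragility_index(5, 100, 20, 100)   # hypothetical trial: 5/100 vs 20/100 events
```

The sketch makes the editorial's point visible: the FI is entirely a function of crossing the arbitrary alpha threshold, whereas a CI on the risk difference would convey the precision directly.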


Subjects
Clinical Trials as Topic, Confidence Intervals, Humans, Bias, Sample Size
11.
Contemp Clin Trials ; 138: 107453, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38253253

ABSTRACT

BACKGROUND: Clinical trials often include interim analyses of the proportion of participants experiencing an event by a fixed time-point. A pre-specified proportion excluded from a corresponding confidence interval (CI) may lead an independent monitoring committee to recommend stopping the trial. Frequently this cumulative proportion is estimated by the Kaplan-Meier estimator with a Wald approximate CI, which may have coverage issues with small samples. METHODS: We reviewed four alternative CI methods for cumulative proportions (Beta Product Confidence Procedure (BPCP), BPCP Mid P, Rothman-Wilson, Thomas-Grunkemeier) and two CI methods for simple proportions (Clopper-Pearson, Wilson). We conducted a simulation study comparing CI methods across true event proportions for 12 scenarios differentiated by sample sizes and censoring patterns. We re-analyzed interim data from A5340, a HIV cure trial considering the proportion of participants experiencing virologic failure. RESULTS: Our simulation study highlights the lower and upper tail error probabilities for each CI method. Across scenarios, we found differences in the performance of lower versus upper bounds. No single method is always preferred. The upper bound of a Wald approximate CI performed reasonably with some error inflation, whereas the lower bound of the BPCP Mid P method performed well. For a trial design similar to A5340, we recommend BPCP Mid P. CONCLUSIONS: The design of future single-arm interim analyses of event proportions should consider the most appropriate CI method based on the relevant bound, anticipated sample size and event proportion. Our paper summarizes available methods, demonstrates performance in a simulation study, and includes code for implementation.
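As background, the two simple-proportion methods compared above are easy to reproduce: the exact Clopper-Pearson interval can be obtained by inverting the binomial CDF, and the Wilson interval has a closed form. This is a stdlib-only sketch; the survival-based methods (BPCP and variants) evaluated in the paper are not reproduced here:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def _solve(f, lo=0.0, hi=1.0):
    """Bisection: boundary p where the monotone condition f(p) stops holding."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def clopper_pearson(k, n, alpha=0.05):
    """Exact CI for a binomial proportion by inverting the binomial CDF."""
    lower = 0.0 if k == 0 else _solve(lambda p: binom_cdf(k - 1, n, p) > 1 - alpha / 2)
    upper = 1.0 if k == n else _solve(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

def wilson(k, n, z=1.959963984540054):
    """Wilson score CI for a binomial proportion."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half
```

At interim analyses with few events, these two intervals can disagree noticeably near the boundary (e.g. zero events), which is exactly the small-sample regime the simulation study targets.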


Subjects
Research Design, Humans, Confidence Intervals, Sample Size, Computer Simulation, Survival Analysis
13.
Stat Med ; 43(3): 606-623, 2024 02 10.
Article in English | MEDLINE | ID: mdl-38038216

ABSTRACT

Tuberculosis (TB) studies often involve four different states under consideration, namely, "healthy," "latent infection," "pulmonary active disease," and "extra-pulmonary active disease." While highly accurate clinical diagnosis tests do exist, they are expensive and generally not accessible in regions where they are most needed; thus, there is an interest in assessing the accuracy of new and easily obtainable biomarkers. For some such biomarkers, the typical stochastic ordering assumption might not be justified for all disease classes under study, and usual ROC methodologies that involve ROC surfaces and hypersurfaces are inadequate. Different types of orderings may be appropriate depending on the setting, and these may involve a number of ambiguously ordered groups that stochastically exhibit larger (or lower) marker scores than the remaining groups. Recently, there has been scientific interest on ROC methods that can accommodate these so-called "tree" or "umbrella" orderings. However, there is limited work discussing the estimation of cutoffs in such settings. In this article, we discuss the estimation and inference around optimized cutoffs when accounting for such configurations. We explore different cutoff alternatives and provide parametric, flexible parametric, and non-parametric kernel-based approaches for estimation and inference. We evaluate our approaches using simulations and illustrate them through a real data set that involves TB patients.


Subjects
Biomarkers, Confidence Intervals, Humans
14.
Stat Methods Med Res ; 33(1): 42-60, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38055982

ABSTRACT

The problem of finding confidence intervals based on data from several independent studies or experiments is considered. A general method of finding confidence intervals by inverting a combined test is proposed. The combined tests considered are the Fisher test, the weighted inverse normal test, the inverse chi-square test and the inverse Cauchy test. The method is illustrated for finding confidence intervals for a common mean of several normal populations, common correlation coefficient of several bivariate normal populations, common coefficient of variation, common mean of several lognormal populations, and for a common mean of several gamma populations. For each case, the confidence intervals based on the combined tests are compared with the other available approximate confidence intervals with respect to coverage probability and precision. R functions to compute all confidence intervals are provided in a supplementary file. The methods are illustrated using several practical examples.
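The combination step underlying this method — here Fisher's test, whose statistic follows a chi-square distribution with an even number of degrees of freedom and therefore has a closed-form tail probability — can be sketched as follows. A confidence interval would then be obtained by inverting the combined test over candidate parameter values, a step not reproduced here; the p-values below are hypothetical:

```python
import math

def chi2_sf_even(x, df):
    """Chi-square survival function P(X > x), closed form for even df."""
    k = df // 2
    term, total = 1.0, 1.0
    for i in range(1, k):          # sum_{i=0}^{k-1} (x/2)^i / i!
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total

def fisher_combined_p(pvals):
    """Fisher's method: -2 * sum(log p_i) ~ chi-square with 2k df under H0."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    return chi2_sf_even(stat, 2 * len(pvals))

# p-values for the same null hypothesis from three independent studies
p_combined = fisher_combined_p([0.04, 0.10, 0.03])
```

To invert the test for, say, a common mean, one would compute per-study p-values for each candidate value of the mean and retain the values whose combined p-value exceeds alpha.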


Subjects
Likelihood Functions, Confidence Intervals
15.
Theor Popul Biol ; 155: 1-9, 2024 02.
Article in English | MEDLINE | ID: mdl-38000513

ABSTRACT

By quantifying key life history parameters of populations, such as growth rate, longevity, and generation time, researchers and administrators can obtain valuable insights into their dynamics. Although point estimates of demographic parameters have been available since the inception of demography as a scientific discipline, the construction of confidence intervals has typically relied on approximations through series expansions or computationally intensive techniques. This study introduces the first mathematical expression for calculating confidence intervals for the aforementioned life history traits when individuals are unidentifiable and data are presented as a life table. The key finding is the accurate estimation of the confidence interval for r, the instantaneous growth rate, which is tested using Monte Carlo simulations with four arbitrary discrete distributions. In comparison to the bootstrap method, the proposed interval construction method proves more efficient, particularly for experiments with a total offspring size below 400. We discuss handling cases where data are organized in extended life tables or as a matrix of vital rates. We have developed and provided accompanying code to facilitate these computations.
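The point estimate at the heart of these intervals, the instantaneous growth rate r, solves the Euler-Lotka equation for a life table. A bisection sketch under one common discrete age convention, on a hypothetical life table — the paper's analytical CI expressions, and the bootstrap it compares against, are not reproduced here:

```python
import math

def growth_rate(lx, mx):
    """Instantaneous growth rate r solving the Euler-Lotka equation
    sum_x l_x * m_x * exp(-r * (x + 1)) = 1 for a discrete life table."""
    def f(r):
        return sum(l * m * math.exp(-r * (x + 1))
                   for x, (l, m) in enumerate(zip(lx, mx))) - 1.0
    lo, hi = -5.0, 5.0           # f is decreasing in r on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical life table: survivorship l_x and fecundity m_x by age class
lx = [1.0, 0.8, 0.5, 0.2]
mx = [0.0, 1.2, 1.5, 0.8]
r = growth_rate(lx, mx)
```

A bootstrap interval for r would wrap this solver around resampled individual schedules, which is exactly the computationally intensive route the analytical expressions are designed to avoid.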


Subjects
Longevity, Population Growth, Humans, Confidence Intervals, Population Dynamics, Life Tables
16.
J Biopharm Stat ; 34(1): 127-135, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-36710407

ABSTRACT

The paper provides computations comparing the accuracy of the saddlepoint approximation approach and the normal approximation method in approximating the mid-p-value of Wilcoxon and log-rank tests for left-truncated data under a truncated binomial design. The comparison is applied to real data examples, along with some simulation studies. Confidence intervals are provided by inversion of the tests under consideration.


Subjects
Confidence Intervals, Humans, Sample Size
19.
Rev. saúde pública (Online) ; 58: 01, 2024. graf
Article in English | LILACS | ID: biblio-1536768

ABSTRACT

OBJECTIVE: This study aims to propose a comprehensive alternative to the Bland-Altman plot method, addressing its limitations and providing a statistical framework for evaluating the equivalence of measurement techniques. This involves introducing an innovative three-step approach for assessing accuracy, precision, and agreement between techniques, which enhances objectivity in equivalence assessment. Additionally, an easy-to-use R package was developed to enable researchers to efficiently analyze and interpret technique equivalence. METHODS: Inferential statistical support for equivalence between measurement techniques was proposed as three nested tests. These were based on structural regressions, with the goal of assessing the equivalence of structural means (accuracy), the equivalence of structural variances (precision), and concordance with the structural bisector line (agreement in measurements obtained from the same subject), using analytical methods and a robust bootstrap approach. To promote better understanding, graphical outputs following Bland and Altman's principles were also implemented. RESULTS: The performance of this method was demonstrated on, and compared against, five data sets from previously published articles that used Bland and Altman's method. One case demonstrated strict equivalence, three cases showed partial equivalence, and one showed poor equivalence. The developed R package, containing open code and data, is freely available with installation instructions at Harvard Dataverse at https://doi.org/10.7910/DVN/AGJPZH. CONCLUSION: Although easy to communicate, the widely cited and applied Bland-Altman plot method is often misinterpreted, since it lacks suitable inferential statistical support. Common alternatives, such as Pearson's correlation or ordinary least-squares linear regression, also fail to locate the weakness of each measurement technique. It is possible to test whether two techniques have full equivalence while preserving graphical communication, in accordance with Bland and Altman's principles, by adding robust and suitable inferential statistics. Decomposing equivalence into three features (accuracy, precision, and agreement) helps locate the sources of the problem when evaluating a new technique.
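For reference, the classical Bland-Altman statistics that the proposed framework extends — the bias (mean difference) and the 95% limits of agreement — are simple to compute. The paired measurements below are hypothetical:

```python
import statistics

def bland_altman(a, b, z=1.96):
    """Bland-Altman bias and approximate 95% limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)          # systematic difference between methods
    sd = statistics.stdev(diffs)           # spread of the differences
    return bias, bias - z * sd, bias + z * sd

method_a = [10.1, 12.3, 9.8, 11.5, 13.0, 10.7]
method_b = [10.4, 12.0, 10.1, 11.2, 13.4, 10.9]
bias, loa_low, loa_high = bland_altman(method_a, method_b)
```

As the article notes, these descriptive limits carry no formal test of equivalence; the proposed three nested tests supply the missing inferential layer.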


Subjects
Confidence Intervals, Regression Analysis, Statistical Data Interpretation, Statistical Inference, Data Accuracy
20.
PLoS One ; 18(11): e0293640, 2023.
Article in English | MEDLINE | ID: mdl-37917602

ABSTRACT

When data are derived under a single or multiple lower limits of quantification (LLOQ), estimating distribution parameters, as well as the precision of these estimates, is challenging, as the way unquantifiable observations below the LLOQs are accounted for needs particular attention. The aim of this investigation is to characterize, using confidence intervals (CI), the precision of censored-sample maximum likelihood estimates of the mean for normal, exponential, and Poisson distributions affected by one or two LLOQs. In a simulation study, asymptotic and bias-corrected accelerated bootstrap CIs for the mean are compared with respect to coverage proportion and interval width. To enable this examination, we derived analytical expressions for the maximum likelihood location parameter estimate under the assumption of exponentially and Poisson distributed data, where the censored-sample method and the simple imputation method are used to account for LLOQs. Additionally, we vary the proportion of observations below the LLOQs. When based on the censored-sample estimate, the bootstrap CI led to higher coverage proportions and narrower interval widths than the asymptotic CI. The results differed by underlying distribution. Under the assumption of normality, the CI's coverage proportion and width suffered most from high proportions of unquantifiable observations. For exponentially and Poisson distributed data, both CI approaches delivered similar results. To derive the CIs, the point estimates from the censored-sample method are preferable, because the point estimate of the simple imputation method leads to higher bias for all investigated distributions, and this biased estimate impairs the coverage proportion of the respective CI. The bootstrap CI surpassed the asymptotic CIs with respect to coverage proportion for the investigated choices of distributional assumptions. The variety of distributions for which the methods are suitable gives the analyst a widely applicable tool for handling LLOQ-affected data with appropriate approaches.
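The censored-sample maximum likelihood idea can be sketched for the exponential case: quantified values enter the likelihood through the density, the c values below the LLOQ through the CDF, and the resulting score equation is solved numerically. The data are hypothetical, and the paper's analytical derivations and CIs go beyond this:

```python
import math

def exp_rate_mle_censored(observed, n_censored, lloq):
    """ML estimate of an exponential rate with n_censored observations
    left-censored at the LLOQ. Solves the score equation
    n_obs/lam - sum(x) + c * L * exp(-lam*L) / (1 - exp(-lam*L)) = 0."""
    n_obs, sx, c, L = len(observed), sum(observed), n_censored, lloq
    def score(lam):
        e = math.exp(-lam * L)
        return n_obs / lam - sx + c * L * e / (1.0 - e)
    lo, hi = 1e-9, 1e6           # score is decreasing in lam on this bracket
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)       # estimated rate; the mean is 1 / rate

obs = [0.9, 1.4, 2.2, 0.7, 3.1, 1.8]   # quantified values (all >= LLOQ)
lam = exp_rate_mle_censored(obs, n_censored=4, lloq=0.5)
```

Compare this with simple imputation, which would replace each censored value by, say, LLOQ/2 and use the ordinary estimator; the censored-sample estimate avoids the bias that imputation introduces.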


Subjects
Statistical Models, Likelihood Functions, Confidence Intervals, Computer Simulation, Bias